
    Two component dark matter with multi-Higgs portals

    With the assistance of two extra groups, i.e., an extra hidden gauge group $SU(2)_D$ and a global $U(1)$ group, we propose a two-component dark matter (DM) model. After the symmetry $SU(2)_D \times U(1)$ is broken, we obtain both vector and scalar DM candidates. The two DM candidates communicate with the standard model (SM) via three Higgs bosons that serve as multi-Higgs portals. The three Higgs bosons are mixing states of the SM Higgs, the Higgs of the hidden sector, and the real part of a supplementary complex scalar singlet. We study the relic density and direct detection of DM in three scenarios. The resonance behaviors and the interplay between the two DM components are illustrated by investigating the relic density over the parameter space of the two DM masses. Electroweak precision parameters constrain the two Higgs-portal couplings ($\lambda_m$ and $\delta_2$). The related vacuum stability and naturalness problems in the $\lambda_m$-$\delta_2$ parameter space are studied as well. The model can alleviate these two problems in some regions of parameter space under the constraints of electroweak precision observables and indirect Higgs searches.
    Comment: 27 pages, 16 figures. Version accepted for publication in JHE
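
    As a rough illustration of the multi-Higgs-portal structure described above (the field names and the exact form of the potential terms below are assumptions for illustration, not taken verbatim from the paper), the three physical Higgs bosons $h_{1,2,3}$ can be written as orthogonal mixtures of the neutral SM Higgs component $h$, the hidden-sector Higgs $h_D$, and the real part $s$ of the complex singlet, while $\lambda_m$ and $\delta_2$ enter as portal couplings in the scalar potential:

    $(h_1, h_2, h_3)^T = O\,(h, h_D, s)^T$, with $O$ a $3\times 3$ orthogonal mixing matrix,
    $V \supset \lambda_m\,(H^\dagger H)(\Phi_D^\dagger \Phi_D) + \delta_2\,(H^\dagger H)\,|S|^2$,

    where $H$, $\Phi_D$, and $S$ denote the SM Higgs doublet, the hidden-sector scalar charged under $SU(2)_D$, and the complex singlet, respectively.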

    The Application of Two-level Attention Models in Deep Convolutional Neural Network for Fine-grained Image Classification

    Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variations in pose, scale, or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding the foreground object or object parts (where) in order to extract discriminative features (what). In this paper, we propose to apply visual attention to the fine-grained classification task using a deep neural network. Our pipeline integrates three types of attention: bottom-up attention that proposes candidate patches, object-level top-down attention that selects patches relevant to a certain object, and part-level top-down attention that localizes discriminative parts. We combine these attentions to train domain-specific deep nets and then use them to improve both the what and the where aspects. Importantly, we avoid using expensive annotations such as bounding boxes or part information at any stage of the pipeline. This weak-supervision constraint makes our work easier to generalize. We have verified the effectiveness of the method on subsets of the ILSVRC2012 dataset and on the CUB200-2011 dataset. Our pipeline delivers significant improvements and achieves the best accuracy under the weakest supervision condition. The performance is competitive with other methods that rely on additional annotations.
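
    A minimal, schematic Python sketch of how the three attentions could be combined at test time (all function and model names below are hypothetical placeholders, not the authors' code; the relevance threshold and equal fusion weights are assumptions):

    import numpy as np

    def classify_fine_grained(image, propose, object_net, part_detector, part_net,
                              keep_thresh=0.5):
        # Bottom-up attention: class-agnostic candidate patches (e.g. selective search).
        patches = propose(image)

        # Object-level top-down attention: keep patches the object-level net
        # considers relevant to the object, then average their class scores.
        scores = [object_net(p) for p in patches]          # each: class-probability vector
        kept = [s for s in scores if s.max() >= keep_thresh]
        object_pred = np.mean(kept if kept else scores, axis=0)

        # Part-level top-down attention: classify localized discriminative parts
        # (e.g. head, body) with a part-tuned network.
        part_scores = [part_net(p) for p in part_detector(image)]
        part_pred = np.mean(part_scores, axis=0)

        # Fuse the object-level and part-level predictions (equal weights assumed).
        return 0.5 * object_pred + 0.5 * part_pred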

    Fine-grained Discriminative Localization via Saliency-guided Faster R-CNN

    Discriminative localization is essential for the fine-grained image classification task, which aims to recognize hundreds of subcategories within the same basic-level category. Key differences among subcategories are subtle and local, reflected in the discriminative regions of objects. Existing methods generally adopt a two-stage learning framework: the first stage localizes the discriminative regions of objects, and the second encodes discriminative features for training classifiers. However, these methods generally have two limitations: (1) the separation into two learning stages is time-consuming, and (2) dependence on object and part annotations for learning discriminative localization leads to heavily labor-consuming labeling. It is highly challenging to address both limitations simultaneously, and existing methods focus on only one of them. Therefore, this paper proposes a discriminative localization approach via saliency-guided Faster R-CNN that addresses the above two limitations at the same time. Our main novelties and advantages are: (1) an end-to-end network based on Faster R-CNN is designed to simultaneously localize discriminative regions and encode discriminative features, which accelerates classification; (2) saliency-guided localization learning is proposed to localize discriminative regions automatically, avoiding labor-consuming labeling. The two are jointly employed to accelerate classification and eliminate dependence on object and part annotations. Compared with state-of-the-art methods on the widely used CUB-200-2011 dataset, our approach achieves both the best classification accuracy and the best efficiency.
    Comment: 9 pages, to appear in ACM MM 201
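
    A small Python sketch of the saliency-guided localization idea described above: a saliency map produced by the classification network is thresholded into a pseudo bounding box that can supervise the localization branch in place of manual annotations (the thresholding rule and its parameter are assumptions for illustration, not the paper's exact procedure):

    import numpy as np

    def saliency_to_pseudo_box(saliency_map, thresh_ratio=0.5):
        # Keep the most salient pixels (threshold relative to the map's maximum).
        mask = saliency_map >= thresh_ratio * saliency_map.max()
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            h, w = saliency_map.shape          # fall back to the whole image
            return 0, 0, w - 1, h - 1
        # The tightest box around the salient region acts as a pseudo ground-truth
        # box for training the localization branch, so no manual object or part
        # annotations are needed.
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())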